Review for NeurIPS paper: Scalable Graph Neural Networks via Bidirectional Propagation

Neural Information Processing Systems

This paper presents a solution for learning GNNs on large-scale graphs efficiently, achieving substantial speed improvements without sacrificing model quality. A couple of reviewers raised the issue that there is parallel work published at KDD using a similar approach to solve the same problem; however, since the KDD paper was published after the submission of this paper, it should not be used as evidence to discredit this paper's contribution. On the other hand, I do hope the authors will include a discussion of this related work and address the clarity issues raised by the reviewers in the final version.


Review for NeurIPS paper: Scalable Graph Neural Networks via Bidirectional Propagation

Neural Information Processing Systems

Additional Feedback:
Major comments:
* Authors should discuss the limitations of their approach, e.g.:
  * Can one say how much performance is lost (if any) due to decoupling predictions from propagations? Is GBP potentially suited for incorporating those? For example, SGC can achieve competitive performance on PPI when increasing the feature dimensionality to 1024 or 2048.
  * Did the authors tune the reported baselines?
Minor comments/questions:
* Line 40: "In theory, this complexity is undesirable for scalable GNNs": The authors should discuss this in more depth.


Scalable Graph Neural Networks via Bidirectional Propagation

Neural Information Processing Systems

Graph Neural Networks (GNNs) are an emerging approach for learning on non-Euclidean data. Recently, there has been increased interest in designing GNNs that scale to large graphs. Most existing methods use "graph sampling" or "layer-wise sampling" techniques to reduce training time; however, these methods still suffer from degraded performance and scalability problems when applied to graphs with billions of edges. This work proposes GBP, a scalable GNN built on bidirectional propagation. Theoretical analysis shows that GBP is the first method that achieves sub-linear time complexity for both the precomputation and the training phases. An extensive empirical study demonstrates that GBP achieves state-of-the-art performance with significantly less training/testing time.
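The abstract does not spell out the propagation algorithm, but the Monte Carlo half of a bidirectional scheme can be sketched with random walks: the step-L propagation of a feature vector under the random-walk transition matrix equals an expectation over walk endpoints. This is a simplified illustration, assuming the row-stochastic transition P = D^{-1}A; the function name `mc_propagate` and its parameters are hypothetical, not the paper's API.

```python
import random

def mc_propagate(adj, x, source, steps, n_walks=10000):
    """Estimate the propagated feature (P^steps x)[source], where
    P = D^{-1} A is the random-walk transition matrix, by averaging
    x over the endpoints of n_walks random walks from `source`."""
    total = 0.0
    for _ in range(n_walks):
        v = source
        for _ in range(steps):
            v = random.choice(adj[v])  # move to a uniform random neighbor
        total += x[v]
    return total / n_walks
```

In a full bidirectional scheme, such forward Monte Carlo estimates would typically be combined with a deterministic reverse push from the target side; balancing the two is what yields the sub-linear complexity claimed in the abstract.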


Online Time-Informed Kinodynamic Motion Planning of Nonlinear Systems

Meng, Fei, Liu, Jianbang, Shi, Haojie, Ma, Han, Ren, Hongliang, Meng, Max Q. -H.

arXiv.org Artificial Intelligence

Sampling-based kinodynamic motion planners (SKMPs) are powerful in finding collision-free trajectories for high-dimensional systems under differential constraints. The time-informed set (TIS) can provide the heuristic search domain to accelerate their convergence to the time-optimal solution. However, existing TIS approximation methods suffer from the curse of dimensionality, computational burden, and a limited scope of applicable systems (e.g., linear and polynomial nonlinear systems). To overcome these problems, we propose a method that leverages deep learning technology, Koopman operator theory, and random set theory. Specifically, we propose a Deep Invertible Koopman operator with control U model, named DIKU, to predict states forward and backward over a long horizon by modifying the auxiliary network with an invertible neural network. A sampling-based approach, ASKU, which performs reachability analysis for the DIKU, is developed to approximate the TIS of nonlinear control systems online. Furthermore, we design an online time-informed SKMP using a direct sampling technique to draw uniform random samples in the TIS. Simulation results demonstrate that our method outperforms other existing works, approximating the TIS in near real-time and achieving superior planning performance on several time-optimal kinodynamic motion planning problems.
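The property DIKU relies on, predicting states both forward and backward, is easiest to see in the lifted Koopman view, where the dynamics are linear and an invertible transition matrix can be run in reverse exactly. The sketch below is a minimal stand-in for a learned model: the matrices `A` and `B` and both helper functions are hypothetical, not from the paper.

```python
import numpy as np

# Hypothetical lifted (Koopman-style) linear dynamics: z_{k+1} = A z_k + B u_k.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])  # invertible, so the dynamics can be run backward
B = np.array([[1.0],
              [0.5]])

def forward(z0, controls):
    """Roll the lifted state forward through a control sequence."""
    zs = [z0]
    for u in controls:
        zs.append(A @ zs[-1] + B @ u)
    return zs

def backward(zT, controls):
    """Recover earlier states from the final one: z_k = A^{-1} (z_{k+1} - B u_k)."""
    A_inv = np.linalg.inv(A)
    zs = [zT]
    for u in reversed(controls):
        zs.append(A_inv @ (zs[-1] - B @ u))
    return zs[::-1]
```

Because the linear map is exactly invertible, running `backward` from the final state of a `forward` rollout reproduces the original trajectory; DIKU obtains an analogous property for learned nonlinear systems by using an invertible neural network.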


Bidirectional Learning for Robust Neural Networks

Pontes-Filho, Sidney, Liwicki, Marcus

arXiv.org Machine Learning

A multilayer perceptron can behave as a generative classifier by applying bidirectional learning (BL). It consists of training an undirected neural network to map input to output and vice versa; it can therefore produce a classifier in one direction and a generator in the opposite direction for the same data. In this paper, two novel learning techniques are introduced which use BL to improve robustness to white-noise static and adversarial examples. The first method is bidirectional propagation of errors, in which error propagation occurs in both the backward and forward directions. Motivated by the fact that its generative model receives a constant vector per class as input, we introduce as a second method the hybrid adversarial networks (HAN). Its generative model receives a random vector as input and its training is based on generative adversarial networks (GANs). To assess the performance of BL, we perform experiments using several architectures with fully connected and convolutional layers, with and without bias. Experimental results show that both methods improve robustness to white-noise static and adversarial examples but behave differently depending on the architecture and task, making one or the other more beneficial. Nevertheless, HAN using a convolutional architecture with batch normalization presents outstanding robustness, reaching state-of-the-art accuracy on adversarial examples of hand-written digits.
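The core idea of training one set of weights in both directions can be sketched with a single linear layer: the forward pass predicts y from x using W, the backward pass predicts x from y using the transposed weights, and the errors from both directions update the same W. This is a minimal sketch under assumptions of my own: `bl_step`, the squared-error losses, and the learning rate are illustrative, not the paper's exact training procedure.

```python
import numpy as np

def bl_step(W, x, y, lr=0.1):
    """One bidirectional learning step on a shared weight matrix W.
    The forward direction uses W.T (classifier: x -> y); the backward
    direction uses W (generator: y -> x). Returns (updated W, summed loss)."""
    err_f = W.T @ x - y  # forward prediction error
    err_b = W @ y - x    # backward prediction error
    # Gradient of the summed squared errors w.r.t. W
    # (the constant factor 2 is folded into the learning rate).
    grad = np.outer(x, err_f) + np.outer(err_b, y)
    W = W - lr * grad
    return W, float(err_f @ err_f + err_b @ err_b)
```

Each step descends the summed loss of both directions, so the same weights are simultaneously pushed toward classifying and generating, which is the sense in which the network is "undirected."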